The Race to Build the DeepSeek of Europe Is On

WIRED

As Europe's longstanding alliance with the US falters, its push to become a self-sufficient AI superpower has become more urgent. As the relationship between the US and its European allies shows signs of strain, AI labs across the continent are searching for inventive ways to close the gap with American rivals that have so far dominated the field. With rare exceptions, US-based firms outstrip European competitors across the AI production line--from processor design and manufacturing, to datacenter capacity, to model and application development. Likewise, the US has captured a massive proportion of the money pouring into AI, reflected in the performance last year of its homegrown stocks and the growth of its economy. The belief in some quarters is that the US-based leaders--Nvidia, Google, Meta, OpenAI, Anthropic, and the like--are already so entrenched as to make it impossible for European nations to break their dependency on American AI, mirroring the pattern in cloud services.


A Multimodal Conversational Agent for Tabular Data Analysis

Awad, Mohammad Nour Al, Ivanov, Sergey, Tikhonova, Olga, Khodnenko, Ivan

arXiv.org Artificial Intelligence

Abstract--Large language models (LLMs) can reshape information processing by handling data analysis, visualization, and interpretation in an interactive, context-aware dialogue with users, including voice interaction, while maintaining high performance. The system lets users query datasets with voice or text instructions and receive answers as plots, tables, statistics, or spoken explanations. Built on LLMs, the suggested design combines the OpenAI Whisper automatic speech recognition (ASR) system, the Qwen-Coder code-generation LLM, custom sandboxed execution tools, and the Coqui library for text-to-speech (TTS) within an agentic orchestration loop. Unlike text-only analysis tools, it adapts responses across modalities and supports multi-turn dialogues grounded in dataset context. In an evaluation of 48 tasks on three datasets, our prototype achieved 95.8% accuracy with model-only generation time under 1.7 seconds (excluding ASR and execution time). A comparison across five LLM sizes (1.5B-32B) revealed accuracy-latency-cost trade-offs, with a 7B model providing the best balance for interactive use. By routing between conversation with the user and code execution constrained to a transparent sandbox, while simultaneously grounding prompts in schema-level context, the Talk2Data agent reliably retrieves actionable insights from tables while making computations verifiable. Beyond the Talk2Data agent itself, we discuss implications for human-data interaction, trust in LLM-driven analytics, and future extensions toward large-scale multimodal assistants. Interacting with data often requires programming skills or statistical expertise, creating barriers for managers, analysts, and other non-technical users [1], [2]. Natural language interfaces (NLIs) aim to improve this information seeking process by allowing users to query data conversationally [3], [4].
At the same time, voice interfaces are becoming increasingly common in daily life, yet existing voice assistants remain limited: they can answer factual questions or control devices, but they lack the analytical capabilities needed for meaningful data exploration.
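The orchestration pattern the abstract describes--routing a query either to conversation or to code execution in a constrained sandbox, with prompts grounded in schema-level context--can be illustrated with a minimal sketch. All names here (`route`, `fake_codegen`, `run_in_sandbox`, `agent_step`) are illustrative assumptions, not the paper's actual API, and the code-generation LLM is replaced by a stub where a real system would call a model such as Qwen-Coder:

```python
# Toy agentic loop: route between chat and sandboxed code execution.
# Hypothetical sketch, not the Talk2Data implementation.

def route(query: str) -> str:
    """Toy router: analytical keywords go to the code-execution path."""
    analytic = ("mean", "sum", "count", "plot", "average", "max", "min")
    return "code" if any(k in query.lower() for k in analytic) else "chat"

def fake_codegen(query: str, schema: dict) -> str:
    # Stub standing in for the code-generation LLM. In a real system the
    # prompt would be grounded in schema-level context (column names,
    # dtypes) so the model emits code that matches the actual table.
    return "result = sum(table['price']) / len(table['price'])"

def run_in_sandbox(code: str, table: dict):
    # Constrained, transparent execution: strip builtins down to a small
    # allowlist and expose only the table, so generated code is verifiable.
    env = {"__builtins__": {"sum": sum, "len": len}, "table": table}
    exec(code, env)
    return env.get("result")

def agent_step(query: str, table: dict, schema: dict):
    if route(query) == "chat":
        return "chat", f"I can analyse columns: {', '.join(schema)}"
    code = fake_codegen(query, schema)
    return "code", run_in_sandbox(code, table)

table = {"price": [10.0, 20.0, 30.0]}
schema = {"price": "float"}
mode, answer = agent_step("What is the average price?", table, schema)
# mode == "code", answer == 20.0
```

In the full pipeline, ASR (Whisper) would transcribe speech before `agent_step` and TTS (Coqui) would voice the answer afterward; the routing and sandboxing step shown here is the core that keeps computations auditable.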